1 Read this first

This document is intended for audiences internal to the project team for the Boulder City, CO Guaranteed Income project (BGI). The data presented here is simulated or uses publicly available estimates for certain parameters, so there are no PII, IRB, or proprietary concerns with this document; it simply wasn’t intended for wide circulation.

For team members interested in the process for weighting and selection, the key sections are:

  • Section 2

  • Section 6

  • Section 6.2

For team members who are primarily interested in seeing an example of the spreadsheet that tracks each sampling wave, please see Appendix 7. The first table is the condensed version that shows just the IDs for each wave. The second table is the expanded version that shows the demographic characteristics of the individuals in each wave. The data can be copied to the clipboard or downloaded as a CSV file.

2 Summary

Steps in the weighting process:

  • 1st) Simulate a population dataset based on the questions from the recruitment form. This represents a rough guess at the total population of those living at 30-60% AMI in Boulder City. This fake dataset is based on initial estimates and/or guesses at what the demographic parameters should be. This population dataset is just for the purposes of illustration.

  • 2nd) Randomly sample 4000 applicants from the simulated population data.

  • 3rd) Select the first ‘wave’ of 200 program selections using two methods: (A) a custom weighting procedure, and (B) a purely random sample of 200 selections from the applicant pool.

  • 4th) Select the second and third waves using propensity score matching against the applicant pool (Ho et al. 2011)1.

  • 5th) Make a list of additional backups to use for further verification if needed. Define a process for selecting these backup selections that prioritizes the least represented groups.

  • Last) Make the dataset with selections and backups available for download (see Appendix 7).

3 Assumptions

  • There will be enough recruits into the program that we can have multiple waves of selections within the weighting criteria we define.

  • Failures of verification will be ~randomly distributed across groups.

  • For the sake of the simulations and calculations here (which are just for an abstract presentation of the process), assume there will be 4000 applicants, 200 selections, and 200 backups in each of three sampling waves. We are also assuming that the applicant pool is a random selection from the population (which probably won’t be the case in our intended application).

  • For the purposes of weighting, assume groups are independent. That is, we have estimates for the proportion of the population by racial category and we use these weights to make a random selection, likewise with gender, and disability, etc.
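
Under the independence assumption, a joint proportion is just the product of the marginal proportions. A quick illustration for intuition (a minimal sketch using two of the weighting parameters, not project code):

```python
# Under the independence assumption, the joint probability of two
# attributes is the product of their marginal proportions.
# Marginals taken from the weighting parameters table:
p_woman = 0.440
p_disability = 0.250

# Expected proportion of the population who are women with a disability.
p_joint = p_woman * p_disability
print(p_joint)  # → 0.11
```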

4 Requirements

  • Ideally, make all matches based on estimates of the population in Boulder City who are either a) between 30 and 60% of area median income (AMI) or b) below the poverty line. Option a is preferable; b is the backup if we encounter data limitations.

  • Proportionate match by race/ethnicity, gender identity, and disability status.

  • Individuals with children under 18 should be represented in the program at roughly twice (2×) their representation in the application pool.
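
A hedged sketch of how the 2× children requirement could be applied: double the observed applicant-pool proportion and give the remainder to the ‘No’ category (the 10% starting figure below is hypothetical):

```python
# Hypothetical proportion of applicant households with children under 18.
p_children = 0.10

# Double their representation (capped at 1.0); the remainder goes to 'No'.
target_yes = min(2 * p_children, 1.0)
target_no = 1.0 - target_yes
print(target_yes, target_no)  # → 0.2 0.8
```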

5 Questionnaire info

The eligibility questionnaire will have questions on each of the above, plus additional eligibility and other characteristics not addressed here.

Ethnicity/race options:

  • Hispanic/Latino

  • Black or African American

  • White (not Latino)

  • Asian

  • 2 or more races

  • Not listed

  • Native Hawaiian/Pacific Islander

  • American Indian/Alaskan Native

Gender:

  • Woman

  • Man

  • Transgender

  • Prefer to self identify (please write in your preferred identity here)

Households with children under 18

  • Yes

  • No

Disability status:

  • Yes
  • No

6 Estimates

This table shows the probabilities that we are working with in the current iteration of our fake data; the values are based on empirical estimates where available.

Race and ethnicity: estimates derived from this City of Boulder online source. The poverty measure is the US Census Bureau’s definition of poverty.

Gender: estimates are derived from the Williams Institute and are based on the entire state of CO. We only have estimates for percent transgender.

Children in household: estimates will be based on the applicant pool (the values in the table are placeholders).

Disability: studies have shown that about 25% of the population is disabled at any given time, and we will use this as our background expectation for Boulder City.

Table 6.1: Parameters for weighting
sub_group target_props
race_ethnicity
White (not latino) 0.756
Hispanic 0.100
Black or African American 0.014
Asian 0.051
American Indian or Alaska Native 0.002
Native Hawaiian or Other Pacific Islander 0.001
Not Listed 0.021
Two or more 0.055
gender
Woman 0.440
Man 0.440
Transgender 0.060
Prefer to self identify 0.060
child_household
No 0.800
Yes 0.200
disability
No 0.750
Yes 0.250

This table shows the sums across sub-groups as an initial internal check; each group should sum to 1. The values for child household have already been adjusted to ensure twice as many households with children are included.

Table 6.2: Proportions for each group (should = 1; a simple comprehension check)
group group_sum
child_household 1
disability 1
gender 1
race_ethnicity 1
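
The check above can be reproduced in a few lines using the values from Table 6.1 (a minimal sketch, not project code):

```python
# Target proportions from the weighting parameters table.
target_props = {
    "race_ethnicity": [0.756, 0.100, 0.014, 0.051, 0.002, 0.001, 0.021, 0.055],
    "gender": [0.440, 0.440, 0.060, 0.060],
    "child_household": [0.800, 0.200],
    "disability": [0.750, 0.250],
}

# Each group's sub-group proportions should sum to 1.
group_sums = {group: round(sum(props), 6) for group, props in target_props.items()}
print(group_sums)  # every value should be 1.0
```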

6.1 Sim data

6.1.1 Population

Fake data for an arbitrary notion of the ‘total population’, meaning all the people in Boulder City living between 30 and 60% AMI. Currently this is 25,000 people.
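
A minimal sketch of how such a population could be simulated, drawing each attribute independently at its Table 6.1 proportion (illustrative only; the project’s actual generation code may differ):

```python
import random

random.seed(1)  # reproducible fake data
N = 25_000

# Levels and target proportions for each attribute (from the parameters table).
levels = {
    "race_ethnicity": (
        ["White (not latino)", "Hispanic", "Black or African American", "Asian",
         "American Indian or Alaska Native",
         "Native Hawaiian or Other Pacific Islander", "Not Listed", "Two or more"],
        [0.756, 0.100, 0.014, 0.051, 0.002, 0.001, 0.021, 0.055],
    ),
    "gender": (["Woman", "Man", "Transgender", "Prefer to self identify"],
               [0.440, 0.440, 0.060, 0.060]),
    "child_household": (["No", "Yes"], [0.800, 0.200]),
    "disability": (["No", "Yes"], [0.750, 0.250]),
}

# Draw each attribute independently, per the independence assumption.
population = [
    {"id": i, **{col: random.choices(vals, weights=wts)[0]
                 for col, (vals, wts) in levels.items()}}
    for i in range(1, N + 1)
]
print(len(population))  # → 25000
```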

A few example rows from the simulated population sample:

Table 6.3: Sample rows from our fake data
id race_ethnicity gender child_household disability
607 White (not latino) Man No Yes
18385 Hispanic Woman No Yes
11269 Not Listed Woman No No
8909 White (not latino) Man No No
18190 White (not latino) Man No Yes
18374 White (not latino) Man No No
1018 Hispanic Man No No
3145 White (not latino) Prefer to self identify No No
23489 White (not latino) Woman Yes No
8901 White (not latino) Prefer to self identify Yes No

6.1.2 Enrollees

Randomly select 4000 from the population.
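
Because this draw is a plain random sample, the observed proportions track the targets. A toy demonstration with a single attribute (stand-in data, not the project dataset):

```python
import random
from collections import Counter

random.seed(2)

# Toy population: 20% 'Yes' (child in household), 80% 'No'.
population = ["Yes"] * 5_000 + ["No"] * 20_000

# A simple random sample of 4000 'applicants'.
applicants = random.sample(population, 4_000)

# Observed proportions stay close to the population proportions.
props = {k: v / 4_000 for k, v in Counter(applicants).items()}
print(props)
```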

Table 6.4: Proportions in randomly selected enrollee data
sub_group count proportions target_proportions
child_household
No 3225 0.806 0.800
Yes 775 0.194 0.200
disability
No 2927 0.732 0.750
Yes 1073 0.268 0.250
gender
Man 1734 0.434 0.440
Prefer to self identify 245 0.061 0.060
Transgender 249 0.062 0.060
Woman 1772 0.443 0.440
race_ethnicity
American Indian or Alaska Native 5 0.001 0.002
Asian 198 0.050 0.051
Black or African American 50 0.013 0.014
Hispanic 423 0.106 0.100
Native Hawaiian or Other Pacific Islander 3 0.001 0.001
Not Listed 66 0.016 0.021
Two or more 213 0.053 0.055
White (not latino) 3042 0.760 0.756

Note: as a reminder/clarifier, in the above table the ‘proportions’ column is what we observe when we select 4000 rows/individuals from our simulated population data. The target_proportions are the values used to simulate the population data. These values will generally be very similar because when you sample a large-ish population at random you will mostly tend to maintain the proportions of its characteristic parts. No weighting is applied at this step because we assume that those who apply to the program are something like a random sample of all those who could apply (the ‘population’).

6.1.3 Select sample 1

To select the first sample wave of 200 individuals from our 4000 applicant pool we first take a weighted sample of the data using the target proportions in Table 6.4.

The weighting procedure:

  • Calculate the expected number of individuals in a sample of 200 if each characteristic appeared at exactly its expected proportion.

    • For any cases where the expected number of people is less than one person, round up to one person. This seems like a small effect, but consider that for a rare characteristic we might expect 0.2 people with that characteristic in a sample of 200. By rounding up to one we have increased the odds that someone with this characteristic gets selected by 5×.
    • Calculate the number of people in the applicant pool who have children under the age of 18 and double the expected number.
  • For any characteristics that have expected counts <= 3, add three to the expected count. This is another way of increasing the representation of rare characteristics.

  • Take a random sample of 25% of the target sample size of 200 and reserve it for individuals with rare characteristics. Rare characteristics are defined by simply counting the characteristics of all the people in the applicant pool. These 50 slots (25% of 200) are reserved for individuals with the rarest 50% of characteristics within each group.

  • The remaining 75% are chosen by simple weighted sampling from the enrollee pool.

  • Lastly, if any characteristics are present in the enrollee pool but still missing from the selected sample, select one person at random with that characteristic and replace someone chosen at random from among those with the most common set of characteristics.

The target proportions in Table 6.4 are based on characteristics of participants, so this first step in the sampling selects more than 200 individuals. We then select 200 people for the first sampling wave using the procedure just described.
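
The expected-count adjustments can be sketched as follows. This is one plausible reading of the rule ordering (round rare expectations up to one, then bump counts of three or fewer by three), using a subset of the race/ethnicity proportions from Table 6.1:

```python
import math

n = 200  # target sample size

# Target proportions (race/ethnicity rows of the parameters table).
target_props = {
    "Native Hawaiian or Other Pacific Islander": 0.001,
    "American Indian or Alaska Native": 0.002,
    "Black or African American": 0.014,
    "Hispanic": 0.100,
    "White (not latino)": 0.756,
}

expected = {}
for level, p in target_props.items():
    e = p * n
    if e < 1:       # round rare expected counts up to one person
        e = 1
    if e <= 3:      # bump small expected counts by three
        e += 3
    expected[level] = math.ceil(e)
print(expected)
```

Note how both rules operate only on the rare characteristics; the common categories keep (roughly) their proportionate counts.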

Table 6.5: Comparing a random sample to custom weighted sample
sub_group props target_counts count_rand proportions_rand count_w proportions_w
race_ethnicity
Native Hawaiian or Other Pacific Islander 0.001 1 NA NA 1 0.005
American Indian or Alaska Native 0.002 1 NA NA 2 0.010
Black or African American 0.014 3 3 0.015 4 0.020
Not Listed 0.021 4 2 0.010 2 0.010
Asian 0.051 10 8 0.040 13 0.065
Two or more 0.055 11 11 0.055 18 0.090
Hispanic 0.100 20 22 0.110 18 0.090
White (not latino) 0.756 151 154 0.770 142 0.710
gender
Transgender 0.060 12 12 0.060 23 0.115
Prefer to self identify 0.060 12 12 0.060 24 0.120
Man 0.440 88 85 0.425 65 0.325
Woman 0.440 88 91 0.455 88 0.440
child_household
Yes 0.200 40 49 0.245 54 0.270
No 0.800 160 151 0.755 146 0.730
disability
Yes 0.250 50 52 0.260 40 0.200
No 0.750 150 148 0.740 160 0.800

6.1.4 Select samples 2 and 3

The second wave selection works by taking the wave 1 selection and then using an algorithm to find each individual’s closest match among the 3,800 individuals remaining in the applicant pool. This is done using a technique called propensity score matching (Ho et al. 2011).

The third wave of sampled individuals is done with the same process.
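
MatchIt handles the actual propensity score matching; as a rough illustration of the matching idea only, here is a greedy 1:1 nearest-neighbor match on the categorical attributes directly (a simple Hamming-distance stand-in with made-up records, not a propensity score model):

```python
# Greedy 1:1 nearest-neighbor matching, illustrated with made-up records.

def distance(a, b):
    # Count mismatched attributes (a simple Hamming distance).
    return sum(a[k] != b[k] for k in ("race", "gender", "child", "disability"))

wave1 = [
    {"id": 1, "race": "White", "gender": "Woman", "child": "Yes", "disability": "No"},
    {"id": 2, "race": "Hispanic", "gender": "Man", "child": "No", "disability": "Yes"},
]
pool = [
    {"id": 10, "race": "White", "gender": "Woman", "child": "Yes", "disability": "No"},
    {"id": 11, "race": "Hispanic", "gender": "Man", "child": "No", "disability": "No"},
    {"id": 12, "race": "Asian", "gender": "Woman", "child": "No", "disability": "No"},
]

# For each wave 1 member, take the closest remaining match from the pool.
matches = {}
available = list(pool)
for person in wave1:
    best = min(available, key=lambda cand: distance(person, cand))
    matches[person["id"]] = best["id"]
    available.remove(best)  # match without replacement
print(matches)  # → {1: 10, 2: 11}
```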

6.2 Viz the waves

6.2.1 Applicant data and population estimates

First, let’s compare the population data to the applicant data:

Table 6.6: Results across three sampling waves
sub_group target_props target_counts count_w1 props_w1 count_w2 props_w2 count_w3 props_w3
race_ethnicity
Native Hawaiian or Other Pacific Islander 0.001 1 1 0.005 2 0.010 NA NA
American Indian or Alaska Native 0.002 1 2 0.010 NA NA NA NA
Black or African American 0.014 3 4 0.020 5 0.025 7 0.035
Not Listed 0.021 4 2 0.010 2 0.010 2 0.010
Asian 0.051 10 13 0.065 13 0.065 13 0.065
Two or more 0.055 11 18 0.090 18 0.090 18 0.090
Hispanic 0.100 20 18 0.090 18 0.090 18 0.090
White (not latino) 0.756 151 142 0.710 142 0.710 142 0.710
gender
Transgender 0.060 12 23 0.115 23 0.115 23 0.115
Prefer to self identify 0.060 12 24 0.120 25 0.125 27 0.135
Man 0.440 88 65 0.325 64 0.320 63 0.315
Woman 0.440 88 88 0.440 88 0.440 87 0.435
child_household
Yes 0.200 40 54 0.270 55 0.275 54 0.270
No 0.800 160 146 0.730 145 0.725 146 0.730
disability
Yes 0.250 50 40 0.200 38 0.190 38 0.190
No 0.750 150 160 0.800 162 0.810 162 0.810

Figure 6.1: Proportions by race group in simulated population data.

Figure 6.2: Proportions by gender in simulated population data.

We can examine just the race and gender breakdowns, above, to see that randomly sampling 4000 individuals from our population of 25000 leads to proportions in each group that are fairly similar.

6.2.2 Proportions in each sampling wave

Next, we can see how the proportions in each sampling wave compare to the ‘target’ proportions in the population data:

6.2.2.1 Race by sampling wave

Figure 6.3: Proportions by racial grouping, sampling waves.

6.2.2.2 Gender by sampling wave

Figure 6.4: Proportions by gender, sampling waves.

6.2.2.3 Child in household by sampling wave

Figure 6.5: Proportions of households with a child in the home, by sampling wave.

6.2.2.4 Disability status by sampling wave

Figure 6.6: Proportions by disability status, sampling wave.

7 Appendix A: Example datasets

The first example dataset presents a column for each sampling wave. The intended use is that all the individuals in the far left column, Wave 1, are selected to the program for verification. If some of these individuals cannot be verified, their replacement is the cell in the same row immediately to the right, in the Wave 2 column. If someone in Wave 2 also cannot be verified, then proceed to Wave 3.

Table 7.1: Suggested format for the ‘simple’ version of the sample waves using the example data generated above.
Table 7.2: Suggested format for the extended, wide, version of the sample waves using the example data generated above showing all attributes for each matched sample wave.

References

Ho, Daniel E., Kosuke Imai, Gary King, and Elizabeth A. Stuart. 2011. “MatchIt: Nonparametric Preprocessing for Parametric Causal Inference.” Journal of Statistical Software 42 (8): 1–28. https://doi.org/10.18637/jss.v042.i08.

  1. Propensity score matching is a technique often used in quasi-experimental designs for statistically matching members of a treatment group to members of a control group. In our case, we use the same kind of algorithm to match each participant in sampling waves 2 and 3 with their most similar counterpart in the applicant pool.↩︎